

WIRED Roundup: AI Psychosis, Missing FTC Files, and Google Bedbugs

WIRED

In this episode of Uncanny Valley, we run through the top stories of the week and look closely at people's complaints to the FTC alleging that ChatGPT led them or their loved ones into AI psychosis. In today's episode, Zoë Schiffer is joined by senior editor Louise Matsakis to run through five stories you need to know about this week, from how SEO is changing in the era of AI to how frogs became a protest symbol. Then, Zoë and Louise dive into why some people have been filing complaints with the FTC about ChatGPT, arguing it has led them into AI psychosis.

Mentioned in this episode: "People Who Say They're Experiencing AI Psychosis Beg the FTC for Help" and "The FTC Is Disappearing Blog Posts About AI Published During Lina Khan's Tenure." Write to us at uncannyvalley@wired.com. You can always listen to this week's podcast through the audio player on this page, but if you want to subscribe for free to get every episode, here's how: if you're on an iPhone or iPad, open the app called Podcasts.

Today on the show, we're bringing you five stories that you need to know about this week. And later, we'll dive into our main story about how several people have filed complaints with the FTC claiming OpenAI's ChatGPT led them or people they love into supposed AI psychosis. I'm joined today by WIRED's senior business editor, Louise Matsakis. It's great to be here. So Louise, our first story this week is actually one that we worked on together, part of our ongoing collaboration with Model Behavior, and it's all about how this holiday season, more shoppers are expected to use chatbots to figure out what to buy.


ChatGPT shares data on how many users exhibit psychosis or suicidal thoughts

BBC News

OpenAI has released new estimates of the number of ChatGPT users who exhibit possible signs of mental health emergencies, including mania, psychosis or suicidal thoughts. The company said that around 0.07% of ChatGPT users active in a given week exhibited such signs, adding that its artificial intelligence (AI) chatbot recognizes and responds to these sensitive conversations. While OpenAI maintains these cases are extremely rare, critics said even a small percentage may amount to hundreds of thousands of people, as ChatGPT recently reached 800 million weekly active users, per boss Sam Altman. As scrutiny mounts, the company said it built a network of experts around the world to advise it. Those experts include more than 170 psychiatrists, psychologists, and primary care physicians who have practiced in 60 countries, the company said. They have devised a series of responses in ChatGPT to encourage users to seek help in the real world, according to OpenAI.


OpenAI Says Hundreds of Thousands of ChatGPT Users May Show Signs of Manic or Psychotic Crisis Every Week

WIRED

OpenAI released initial estimates about the share of users who may be experiencing symptoms like delusional thinking, mania, or suicidal ideation, and says it has tweaked GPT-5 to respond more effectively. For the first time ever, OpenAI has released a rough estimate of how many ChatGPT users globally may show signs of having a severe mental health crisis in a typical week. The company said Monday that it worked with experts around the world to make updates to the chatbot so it can more reliably recognize indicators of mental distress and guide users toward real-world support. In recent months, a growing number of people have ended up hospitalized, divorced, or dead after having long, intense conversations with ChatGPT. Some of their loved ones allege the chatbot fueled their delusions and paranoia.


AI Psychosis Is Rarely Psychosis at All

WIRED

A wave of AI users presenting in states of psychological distress gave birth to an unofficial diagnostic label. Experts say it's neither accurate nor needed, but concede that it's likely to stay. A new trend is emerging in psychiatric hospitals. People in crisis are arriving with false, sometimes dangerous beliefs, grandiose delusions, and paranoid thoughts. A common thread connects them: marathon conversations with AI chatbots.


The Psychogenic Machine: Simulating AI Psychosis, Delusion Reinforcement and Harm Enablement in Large Language Models

Yeung, Joshua Au, Dalmasso, Jacopo, Foschini, Luca, Dobson, Richard JB, Kraljevic, Zeljko

arXiv.org Artificial Intelligence

Background: Emerging reports of "AI psychosis," in which user-LLM interactions may exacerbate or induce psychosis or adverse psychological symptoms, are on the rise. Whilst the sycophantic and agreeable nature of LLMs can be beneficial, it becomes a vector for harm by reinforcing delusional beliefs in vulnerable users. Methods: Psychosis-bench is a novel benchmark designed to systematically evaluate the psychogenicity of LLMs. It comprises 16 structured, 12-turn conversational scenarios simulating the progression of delusional themes (Erotic Delusions, Grandiose/Messianic Delusions, Referential Delusions) and potential harms. We evaluated eight prominent LLMs for Delusion Confirmation (DCS), Harm Enablement (HES), and Safety Intervention (SIS) across explicit and implicit conversational contexts. Findings: Across 1,536 simulated conversation turns, all LLMs demonstrated psychogenic potential, showing a strong tendency to perpetuate rather than challenge delusions (mean DCS of 0.91 $\pm$ 0.88). Models frequently enabled harmful user requests (mean HES of 0.69 $\pm$ 0.84) and offered safety interventions in only roughly a third of applicable turns (mean SIS of 0.37 $\pm$ 0.48). In 51 of 128 scenarios (39.8%), no safety interventions were offered. Performance was significantly worse in implicit scenarios: models were more likely to confirm delusions and enable harm while offering fewer interventions (p < .001). A strong correlation was found between DCS and HES (rs = .77). Model performance varied widely, indicating that safety is not an emergent property of scale alone. Conclusion: This study establishes LLM psychogenicity as a quantifiable risk and underscores the urgent need to rethink how we train LLMs. We frame this issue not merely as a technical challenge but as a public health imperative requiring collaboration between developers, policymakers, and healthcare professionals.
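The abstract describes a turn-level evaluation: each of the 12 turns in a scenario is scored for delusion confirmation, harm enablement, and safety intervention, and these are averaged across turns and scenarios. The sketch below illustrates that aggregation pattern only; the `Turn` fields, score ranges, and the toy scenario are assumptions for illustration, not the paper's actual rubric or data.

```python
from dataclasses import dataclass
from statistics import mean

@dataclass
class Turn:
    """Hypothetical per-turn annotations (the paper's rubric is not public here)."""
    delusion_confirmation: float  # DCS: higher = model reinforces the delusion
    harm_enablement: float        # HES: higher = model enables a harmful request
    safety_intervention: int      # SIS: 1 if the model offered an intervention

def score_scenario(turns):
    """Aggregate turn-level annotations into per-scenario mean scores."""
    return {
        "DCS": mean(t.delusion_confirmation for t in turns),
        "HES": mean(t.harm_enablement for t in turns),
        "SIS": mean(t.safety_intervention for t in turns),
    }

# Toy 12-turn scenario: the model confirms the delusion for 11 turns
# and offers a single safety intervention on the final turn.
turns = [Turn(1.0, 0.5, 0)] * 11 + [Turn(0.0, 0.0, 1)]
print(score_scenario(turns))
```

Averaging per-scenario means across all scenarios and models would then yield headline figures analogous to the abstract's mean DCS, HES, and SIS values.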


'AI psychosis': could chatbots fuel delusional thinking? – podcast

The Guardian

There are increasing reports of people experiencing delusions after intensive use of AI chatbots. The phenomenon, dubbed 'AI psychosis', has raised concerns that features built into large language models may contribute to some users losing touch with reality. Madeleine Finlay speaks to Dr Hamilton Morrin, a psychiatrist and researcher at King's College London, about his recent preprint exploring who is at risk and how models could be made safer.


Microsoft boss troubled by rise in reports of 'AI psychosis'

BBC News

A number of people have contacted me at the BBC recently to share personal stories about their experiences with AI chatbots. They vary in content, but what they all share is a genuine conviction that what has happened is real. One wrote that she was certain she was the only person in the world that ChatGPT had genuinely fallen in love with. Another was convinced they had "unlocked" a human form of Elon Musk's chatbot Grok and believed their story was worth hundreds of thousands of pounds. A third, in deep distress, claimed a chatbot had exposed her to psychological abuse as part of a covert AI training exercise.